Supported Models
Introduction
To help users get the best experience, Syncause has tested and evaluated mainstream large language models and maintains a recommended support list. We recommend choosing models from this list to ensure optimal performance and stability. If you use a model that is not on the list (or an unreleased model), its reasoning capability should at least match that of the recommended models to achieve good results.
info
- For model API integration, please refer to Model Integration
Recommended Models List
Claude
Model Name | Speed | Quality |
---|---|---|
anthropic/claude-4-sonnet | 4min+ | ⭐⭐⭐⭐⭐ |
anthropic/claude-3.7-sonnet | 4.5min+ | ⭐⭐⭐⭐⭐ |
DeepSeek
Model Name | Speed | Quality |
---|---|---|
deepseek-v3 | 6min+ | ⭐⭐⭐⭐⭐ |
Gemini
Model Name | Speed | Quality |
---|---|---|
gemini-2.5-flash | 2.5min+ | ⭐⭐⭐⭐⭐ |
GPT
Model Name | Speed | Quality |
---|---|---|
openai/gpt-4o | 3min+ | ⭐⭐⭐⭐⭐ |
openai/gpt-4.1 | 4min+ | ⭐⭐⭐⭐⭐ |
Grok
Model Name | Speed | Quality |
---|---|---|
xai/grok-4-0709 | 5min+ | ⭐⭐⭐⭐⭐ |
xai/grok-3 | 4min+ | ⭐⭐⭐⭐⭐ |
Qwen
Model Name | Speed | Quality |
---|---|---|
qwen-plus (qwen3) | 5min+ | ⭐⭐⭐⭐⭐ |
Known Issue Models
Claude
Model Name | Speed | Issues |
---|---|---|
anthropic/claude-3.5-sonnet | Cannot complete | 1. Unable to correctly call external tools |
Gemini
Model Name | Speed | Issues |
---|---|---|
gemini-2.5-pro | 5min+ | 1. Executes the full RCA process correctly, but the model currently produces empty output at times, pending an official fix 2. Thinking mode is enabled by default and cannot be disabled |
GPT
Model Name | Speed | Issues |
---|---|---|
openai/gpt-4o-mini | Cannot complete | 1. Unable to correctly call external tools |
Custom Models
info
When using Syncause, the integrated large language models should meet the following capability requirements:
- Reasoning ability: Able to correctly understand user intent and make reasonable logical judgments.
- Instruction following: Good instruction execution capability, able to accurately call and use various external tools.
- Context length: Recommended to support 128k tokens or more for better handling of long texts and complex tasks.
- Execution speed: Models that produce an explicit thinking process typically respond more slowly; factor this into expected analysis time.
The simplest option is to select a model of comparable capability from the recommended list and integrate it.
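As a quick sanity check before integrating a custom model, the capability requirements above can be expressed as a small pre-flight validation. This is an illustrative sketch only, not part of the Syncause API; the field names (`supports_tool_calls`, `context_length`) are assumptions about how you might describe a candidate model.

```python
# Hypothetical pre-flight check for a custom model. The dict keys below are
# illustrative, not a Syncause interface; adapt them to however you record
# a candidate model's capabilities.

MIN_CONTEXT_TOKENS = 128_000  # recommended minimum context length from the list above


def check_model_capabilities(model: dict) -> list[str]:
    """Return a list of unmet requirements for a candidate model (empty = OK)."""
    problems = []
    # Instruction following: the model must be able to call external tools,
    # since models without reliable tool calling cannot complete the RCA process.
    if not model.get("supports_tool_calls"):
        problems.append("model must be able to call external tools")
    # Context length: 128k tokens or more is recommended for long inputs.
    if model.get("context_length", 0) < MIN_CONTEXT_TOKENS:
        problems.append(f"context length below {MIN_CONTEXT_TOKENS} tokens")
    return problems


# Example: a candidate comparable to models on the recommended list
candidate = {"name": "my-model", "supports_tool_calls": True, "context_length": 200_000}
print(check_model_capabilities(candidate))  # [] means all checks pass
```

Reasoning quality itself cannot be checked mechanically like this; for that, compare the candidate's behavior against a model from the recommended list on a representative task.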